Multi-Source Domain Adaptation



Your Classifier can Secretly Suffice Multi-Source Domain Adaptation

Neural Information Processing Systems

A common approach [15, 43, 61] is to learn a shared feature extractor, along with domain-specific classifier modules (Figure 1a), which yield an ensemble prediction for the target samples.
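The shared-extractor-plus-heads design described above can be sketched as follows. This is a minimal NumPy illustration with hypothetical shapes and random weights, not the cited papers' actual models: one shared feature map feeds several domain-specific classifier heads, and the target prediction is the average of their softmax outputs.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Shared feature extractor (here just a fixed random linear map).
W_shared = rng.normal(size=(16, 8))

# One classifier head per source domain (3 domains, 4 classes).
heads = [rng.normal(size=(8, 4)) for _ in range(3)]

def ensemble_predict(x):
    """Average the softmax outputs of all domain-specific heads,
    then take the argmax as the ensemble prediction."""
    feats = x @ W_shared
    probs = np.mean([softmax(feats @ W) for W in heads], axis=0)
    return probs.argmax(axis=-1)

x_target = rng.normal(size=(5, 16))
preds = ensemble_predict(x_target)  # one label per target sample
```

In a real model the linear maps would be learned networks; the averaging step is the only part specific to the ensemble idea.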


Multi-source Domain Adaptation for Semantic Segmentation

Neural Information Processing Systems

Simulation-to-real domain adaptation for semantic segmentation has been actively studied for various applications such as autonomous driving. Existing methods mainly focus on a single-source setting, which cannot easily handle a more practical scenario of multiple sources with different distributions. In this paper, we propose to investigate multi-source domain adaptation for semantic segmentation. Specifically, we design a novel framework, termed Multi-source Adversarial Domain Aggregation Network (MADAN), which can be trained in an end-to-end manner. First, we generate an adapted domain for each source with dynamic semantic consistency while aligning at the pixel-level cycle-consistently towards the target. Second, we propose sub-domain aggregation discriminator and cross-domain cycle discriminator to make different adapted domains more closely aggregated. Finally, feature-level alignment is performed between the aggregated domain and target domain while training the segmentation network. Extensive experiments from synthetic GTA and SYNTHIA to real Cityscapes and BDDS datasets demonstrate that the proposed MADAN model outperforms state-of-the-art approaches. Our source code is released at: https://github.com/Luodian/MADAN.
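The sub-domain aggregation idea can be illustrated with a toy adversarial objective. This is a hedged sketch of the general mechanism, not MADAN's actual losses: a discriminator is trained to tell the K adapted source domains apart, while the generators are pushed to make its output uniform, so the adapted domains aggregate. The function names and the uniform-target formulation are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def discriminator_ce(logits, domain_ids):
    """Cross-entropy of the sub-domain discriminator: low values mean
    it can still tell the adapted domains apart."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(domain_ids)), domain_ids] + 1e-12))

def aggregation_loss(logits):
    """Adversarial signal for the generators: cross-entropy against a
    uniform distribution over the K domains, minimized when the
    discriminator cannot distinguish them."""
    p = softmax(logits)
    K = logits.shape[1]
    uniform = np.full(K, 1.0 / K)
    return -np.mean(np.sum(uniform * np.log(p + 1e-12), axis=1))
```

When the discriminator is confident (separable domains), the aggregation loss is large; when its output is uniform, the loss reaches its minimum of log K.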


Your Classifier can Secretly Suffice Multi-Source Domain Adaptation

Neural Information Processing Systems

Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain, under a domain-shift. Existing methods aim to minimize this domain-shift using auxiliary distribution alignment objectives. In this work, we present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision. Thus, we aim to utilize implicit alignment without additional training objectives to perform adaptation. To this end, we use pseudo-labeled target samples and enforce a classifier agreement on the pseudo-labels, a process called Self-supervised Implicit Alignment (SImpAl). We find that SImpAl readily works even under category-shift among the source domains. Further, we propose classifier agreement as a cue to determine the training convergence, resulting in a simple training algorithm. We provide a thorough evaluation of our approach on five benchmarks, along with detailed insights into each component of our approach.
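The classifier-agreement step at the heart of this abstract can be sketched in a few lines. This is an illustrative reading of the agreement rule, not the paper's exact procedure: a target sample is pseudo-labeled only when every source classifier predicts the same class; the `min_conf` cutoff on the averaged probability is a hypothetical extra knob.

```python
import numpy as np

def agree_pseudo_labels(probs_per_head, min_conf=0.0):
    """Keep a target sample only when all source classifiers agree on its
    class. probs_per_head: list of (N, C) softmax outputs, one per head.
    Returns the kept sample indices and their pseudo-labels."""
    preds = np.stack([p.argmax(axis=1) for p in probs_per_head])  # (H, N)
    mean_p = np.mean(probs_per_head, axis=0)                      # (N, C)
    agree = (preds == preds[0]).all(axis=0)
    conf_ok = mean_p[np.arange(mean_p.shape[0]), preds[0]] >= min_conf
    keep = agree & conf_ok
    idx = np.where(keep)[0]
    return idx, preds[0, idx]
```

The fraction of samples on which the classifiers agree can then also serve as the convergence cue the abstract mentions.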



Progressive Multi-Source Domain Adaptation for Personalized Facial Expression Recognition

Zeeshan, Muhammad Osama, Pedersoli, Marco, Koerich, Alessandro Lameiras, Granger, Eric

arXiv.org Artificial Intelligence

Abstract--Personalized facial expression recognition (FER) involves adapting a machine learning model using samples from labeled source and unlabeled target domains. Given the challenges of recognizing subtle expressions with considerable interpersonal variability, state-of-the-art unsupervised domain adaptation (UDA) methods focus on the multi-source UDA (MSDA) setting, where each domain corresponds to a specific subject, to improve model accuracy and robustness. State-of-the-art MSDA methods for FER address this domain shift by considering all the sources when adapting to the target representations. Nevertheless, adapting to a target subject presents significant challenges due to large distributional differences between source and target domains, often resulting in negative transfer. In addition, integrating all sources simultaneously increases computational costs and causes misalignment with the target. To address these issues, we propose a progressive MSDA approach that gradually introduces information from subjects (source domains) based on their similarity to the target subject. This ensures that only the sources most relevant to the target are selected, which helps avoid the negative transfer caused by dissimilar sources. During adaptation, the source domains are introduced in a curriculum manner: we first exploit the closest sources to reduce the distribution shift with the target and then move towards the furthest, while only considering the most relevant sources based on a predetermined threshold. Furthermore, to mitigate catastrophic forgetting caused by the incremental introduction of source subjects, we implement a density-based memory mechanism that preserves the most relevant historical source samples for adaptation. Performance is also evaluated in a cross-dataset setting (UNBC-McMaster BioVid), showing the importance of gradually adapting to source subjects.
In recent years, there has been a growing demand for deep learning (DL) models that can perform well on FER across various industrial sectors, such as detecting suspicious or criminal behavior, automated emotion recognition, or the estimation of pain in health care [1]-[4]. (The authors are affiliated with the LIVIA and ILLS, the Department of Systems Engineering, and the Department of Software Engineering at ETS Montreal, Canada.) Therefore, adapting a deep FER model to a specific individual (i.e., personalization) is important to maintain a high level of performance. Personalized FER has been extensively studied in the literature, primarily through supervised learning approaches and fine-tuning techniques [6]-[8] to capture individual-specific nuances. These approaches mostly rely on fully or weakly labeled data to adapt and create a personalized model for each subject.
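The similarity-based curriculum described in the abstract, ranking source subjects by closeness to the target and admitting only those within a relevance threshold, can be sketched as follows. This is a hypothetical illustration: the paper's actual similarity measure and threshold may differ, and mean-feature Euclidean distance is just one plausible choice.

```python
import numpy as np

def progressive_source_order(target_feats, source_feats, threshold):
    """Order source subjects by the distance between their mean feature
    and the target's mean feature (closest first), keeping only those
    within a relevance threshold. The threshold is a hypothetical
    hyperparameter standing in for the paper's predetermined cutoff."""
    mu_t = target_feats.mean(axis=0)
    dists = np.array([np.linalg.norm(s.mean(axis=0) - mu_t)
                      for s in source_feats])
    order = np.argsort(dists)
    return [int(i) for i in order if dists[i] <= threshold]
```

Adaptation would then proceed over the returned subjects in order, with distant subjects excluded up front to limit negative transfer.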



Supplementary: Your Classifier can Secretly Suffice Multi-Source Domain Adaptation

Neural Information Processing Systems

Owing to space limits, we present a summary of results on DomainNet in the paper. Results for the prior methods are reported from [9]. Finally, we study thresholding schemes. We find that SImpAl works well even under category-shift: our approach exhibits a relatively lower drop in accuracy.